The reviewers found the paper to be clearly written, technically sound, and the results to be of interest to the (fair) ML community.
We thank the reviewers for their thorough and positive reviews. We will of course incorporate all the edits suggested by the reviewers, along with further clarifications, and we will restate the theorem so that it states precisely what is proven. In this paper, we chose to derive the generalization bounds using the Graph dimension and the VC dimension. Thank you for your positive review.
Sample Complexity of Uniform Convergence for Multicalibration
Eliran Shabat, Lee Cohen, Yishay Mansour
There is a growing interest in societal concerns in machine learning systems, especially in fairness. Multicalibration gives a comprehensive methodology to address group fairness. In this work, we address the multicalibration error and decouple it from the prediction error. The importance of decoupling the fairness metric (multicalibration) and the accuracy (prediction error) lies in the inherent trade-off between the two, and in the societal decision regarding the "right trade-off" (as often imposed by regulators). Our work gives sample complexity bounds for uniform convergence guarantees of the multicalibration error, which implies that, regardless of the accuracy, we can guarantee that the empirical and (true) multicalibration errors are close. We emphasize that our results: (1) are more general than previous bounds, as they apply to both agnostic and realizable settings and do not rely on a specific type of algorithm (such as differentially private ones), (2) improve over previous multicalibration sample complexity bounds, and (3) imply uniform convergence guarantees for the classical calibration error.
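The quantity the abstract decouples from accuracy can be made concrete with a small sketch (illustrative only, not from the paper; the function name and binning scheme are assumptions): the empirical multicalibration error can be estimated as the largest calibration gap over every (group, prediction-bin) cell, i.e. the largest gap between the mean outcome and the mean prediction within a cell.

```python
import numpy as np

def empirical_multicalibration_error(preds, labels, groups, n_bins=10):
    """Largest absolute calibration gap over (group, prediction-bin) cells.

    preds  : array of predicted scores in [0, 1]
    labels : array of binary outcomes
    groups : dict mapping group name -> boolean membership mask
    """
    # Discretize predictions into n_bins equal-width bins over [0, 1].
    bins = np.minimum((preds * n_bins).astype(int), n_bins - 1)
    worst = 0.0
    for mask in groups.values():
        for b in range(n_bins):
            cell = mask & (bins == b)
            if cell.any():
                # Gap between mean outcome and mean prediction in this cell.
                gap = abs(labels[cell].mean() - preds[cell].mean())
                worst = max(worst, gap)
    return worst
```

A perfectly calibrated predictor (e.g. one that predicts 0.5 on a cell whose outcomes average 0.5) attains error 0 under this estimate, while a predictor that scores 0.9 on all-negative examples incurs a gap of 0.9. Groups may overlap, which is exactly the setting multicalibration is designed for.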
Machine learning pt.1: Artificial Neural Networks
We want to apply our network to new data and classify those inputs. If we overtrain / overfit the network on our training data, its accuracy will be deceiving: it may work very well on the training data but fail on test data. To prevent overfitting, we apply preprocessing techniques and tune our hyperparameters.
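The failure mode described above can be demonstrated with a minimal sketch (illustrative, not from the original post): a 1-nearest-neighbor classifier that memorizes pure-noise labels scores perfectly on its training data, yet does no better than chance on held-out data, because there was never any signal to learn.

```python
import numpy as np

rng = np.random.default_rng(0)

# Features are random and labels are pure noise: there is no true
# relationship, so good held-out accuracy is impossible.
X_train = rng.normal(size=(100, 5))
y_train = rng.integers(0, 2, 100)
X_test = rng.normal(size=(200, 5))
y_test = rng.integers(0, 2, 200)

def predict_1nn(X, X_ref, y_ref):
    # Memorize the training set: answer with the label of the nearest
    # stored point (squared Euclidean distance).
    dists = ((X[:, None, :] - X_ref[None, :, :]) ** 2).sum(axis=2)
    return y_ref[dists.argmin(axis=1)]

# Training accuracy is perfect (every point is its own nearest
# neighbor), while test accuracy hovers around chance level (~0.5).
train_acc = (predict_1nn(X_train, X_train, y_train) == y_train).mean()
test_acc = (predict_1nn(X_test, X_train, y_train) == y_test).mean()
```

The gap between `train_acc` and `test_acc` is exactly the deceiving accuracy the paragraph warns about, which is why performance is always judged on data the model has not seen.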